2,204 research outputs found

    Corporate social responsibility and stock price crash risk

    Get PDF
    This study investigates whether corporate social responsibility (CSR) mitigates or contributes to stock price crash risk. Crash risk, defined as the conditional skewness of the return distribution, captures asymmetry in risk and is important for investment decisions and risk management. If socially responsible firms commit to a high standard of transparency and engage in less bad-news hoarding, they would have lower crash risk. However, if managers engage in CSR to cover up bad news and divert shareholder scrutiny, CSR would be associated with higher crash risk. Our findings support the mitigating effect of CSR on crash risk. We find that firms' CSR performance is negatively associated with future crash risk after controlling for other predictors of crash risk. The result holds after we account for potential endogeneity. Moreover, the mitigating effect of CSR on crash risk is more pronounced when firms have less effective corporate governance or a lower level of institutional ownership. The results are consistent with the notion that firms that actively engage in CSR also refrain from bad-news hoarding and thus reduce crash risk. This role of CSR is particularly important when governance mechanisms, such as monitoring by boards or institutional investors, are weak. JEL classification: G14; G30; M14; M4
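A common way to operationalize crash risk as conditional skewness is the negative coefficient of skewness (NCSKEW) of firm-specific weekly returns, in the spirit of Chen, Hong, and Stein (2001). The sketch below is illustrative only; the function name and the choice of inputs are ours, not necessarily this study's exact measure.

```python
import numpy as np

def ncskew(weekly_returns):
    """Negative coefficient of skewness: the negated sample skewness of
    firm-specific weekly returns, so larger values indicate a more
    left-skewed, crash-prone return distribution."""
    w = np.asarray(weekly_returns, dtype=float)
    n = len(w)
    dev = w - w.mean()
    num = -(n * (n - 1) ** 1.5) * np.sum(dev ** 3)
    den = (n - 1) * (n - 2) * np.sum(dev ** 2) ** 1.5
    return num / den
```

A symmetric return series yields an NCSKEW of zero, while a series with a single large negative week yields a positive value, which is the asymmetry the measure is designed to pick up.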

    Intellectual Property Law and International Arbitration

    Get PDF

    Forest Aboveground Biomass Estimation Using Multi-Source Remote Sensing Data in Temperate Forests

    Get PDF
    Forests are a crucial part of global ecosystems. Accurately estimating aboveground biomass (AGB) is important in many applications, including monitoring carbon stocks, investigating forest degradation, and designing sustainable forest management strategies. Remote sensing techniques have proved to be a cost-effective way to estimate forest AGB with timely and repeated observations. This dissertation investigated the use of multiple remotely sensed datasets for forest AGB estimation in temperate forests. We compared the performance of Landsat and lidar data, individually and fused, for estimating AGB using multiple linear regression (MLR), Random Forest (RF), and Geographically Weighted Regression (GWR). Our results showed that MLR performed similarly to GWR, and both outperformed RF. Integration of lidar and Landsat inputs outperformed either data source alone. However, although lidar provides valuable three-dimensional forest structure information, acquiring comprehensive lidar coverage is often cost prohibitive. Thus, we developed a lidar sampling framework to support AGB estimation from Landsat images. We compared two sampling strategies, systematic and classification-based, and found that the systematic sampling selection method was highly dependent on site conditions and had higher model variability. The classification-based lidar sampling strategy was easy to apply and provides a framework that is readily transferable to new study sites. The performance of Sentinel-2 and Landsat 8 data for quantifying AGB in a temperate forest using RF regression was also tested. We modeled AGB using three datasets: Sentinel-2, Landsat 8, and a pseudo dataset that retained the spatial resolution of Sentinel-2 but only the spectral bands that matched those on Landsat 8. We found that while RF model parameters impact model outcomes, it is more important to focus attention on variable selection.
    Our results showed that the incorporation of red-edge information increased AGB estimation accuracy by approximately 6%. The additional spatial resolution improved accuracy by approximately 3%. The variable importance ranks in the RF regression model showed that, in addition to the red-edge bands, the shortwave infrared bands were important either individually (in the Sentinel-2 model) or in band indices. With the growing availability of remote sensing datasets, developing tools to appropriately and efficiently apply remote sensing data is increasingly important.
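The RF-regression-with-variable-importance workflow described above can be sketched as follows. This is a minimal illustration on synthetic data, not the dissertation's pipeline: the four predictor columns (standing in for red, NIR, red-edge, and SWIR reflectance) and the simulated AGB response are assumptions for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-ins for per-pixel band predictors and field-measured AGB.
rng = np.random.default_rng(0)
n = 200
X = rng.uniform(size=(n, 4))  # columns: red, NIR, red-edge, SWIR (illustrative)
# Simulated AGB (Mg/ha), made to depend most strongly on the red-edge column.
y = 50 + 120 * X[:, 2] + 40 * X[:, 3] + rng.normal(scale=5, size=n)

rf = RandomForestRegressor(n_estimators=300, random_state=0)
rf.fit(X, y)

# Variable importance ranking, the step the abstract emphasizes over tuning.
ranked = np.argsort(rf.feature_importances_)[::-1]
```

Inspecting `ranked` shows which bands the model leans on, mirroring the abstract's finding that variable selection (e.g. keeping red-edge and SWIR inputs) matters more than RF parameter tuning.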

    Accurate De Novo Prediction of Protein Contact Map by Ultra-Deep Learning Model

    Full text link
    Recently, exciting progress has been made on protein contact prediction, but the predicted contacts for proteins without many sequence homologs are still of low quality and not very useful for de novo structure prediction. This paper presents a new deep learning method that predicts contacts by integrating both evolutionary coupling (EC) and sequence conservation information through an ultra-deep neural network formed by two deep residual networks. This deep neural network allows us to model very complex sequence-contact relationships as well as long-range inter-contact correlation. Our method greatly outperforms existing contact prediction methods and leads to much more accurate contact-assisted protein folding. Tested on three datasets of 579 proteins, the average top L long-range prediction accuracy obtained by our method, the representative EC method CCMpred, and the CASP11 winner MetaPSICOV is 0.47, 0.21, and 0.30, respectively; the average top L/10 long-range accuracy of our method, CCMpred, and MetaPSICOV is 0.77, 0.47, and 0.59, respectively. Ab initio folding using our predicted contacts as restraints can yield correct folds (i.e., TMscore > 0.6) for 203 test proteins, while folding using MetaPSICOV- and CCMpred-predicted contacts can do so for only 79 and 62 proteins, respectively. Further, our contact-assisted models have much better quality than template-based models. Using our predicted contacts as restraints, we can (ab initio) fold 208 of the 398 membrane proteins with TMscore > 0.5. By contrast, when the training proteins of our method are used as templates, homology modeling can only do so for 10 of them. One interesting finding is that even if we do not train our prediction models with any membrane proteins, our method works very well on membrane protein prediction. Finally, in the recent blind CAMEO benchmark, our method successfully folded 5 test proteins with a novel fold.
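The building unit of the deep residual networks mentioned above is the residual block: a couple of transforms plus an identity shortcut, which is what lets such networks be stacked very deep. The toy sketch below uses plain matrix multiplies rather than the paper's 1D/2D convolutions; the function and weight names are ours.

```python
import numpy as np

def residual_block(x, w1, w2):
    """One pre-activation residual block (toy version): two linear
    transforms with ReLU nonlinearities, plus an identity shortcut so
    signals and gradients can bypass the transforms entirely."""
    relu = lambda z: np.maximum(z, 0.0)
    out = relu(x) @ w1
    out = relu(out) @ w2
    return x + out  # identity skip connection
```

With the transform weights at zero, the block reduces to the identity map, which is why depth can be added without degrading an already-trained stack.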

    THE IMPACT OF INSTITUTIONAL HOLDING AND BANK LEVERAGE ON STOCK RETURN VOLATILITY

    Get PDF
    This paper analyses the relation between stock return volatility, institutional holdings, and company leverage in the US banking industry over the period 1980 to 2013. We find that institutional holdings and bank leverage have a negative relationship with stock return volatility. Our results are not driven only by cross-sectional variation, as we find that bank characteristics such as size, age, and ROE are significant in a fixed-effects specification.

    Does eliminating the Form 20-F reconciliation from IFRS to U.S. GAAP have capital market consequences?

    Get PDF
    This paper investigates the capital market consequences of the SEC's decision to eliminate the reconciliation requirement for cross-listed companies following International Financial Reporting Standards (IFRS). We find no evidence that the elimination has a negative impact on firms' market liquidity or probability of informed trading (PIN). We also find no evidence of a significant impact on the cost of equity, analyst forecasts, institutional ownership, stock price efficiency, or synchronicity. Moreover, IFRS users do not increase disclosure frequency or supply the reconciliation voluntarily. Our results do not support the argument that eliminating the reconciliation results in information loss or greater information asymmetry. JEL classification: M41; G15; G1

    Prompt Optimization of Large Language Model for Interactive Tasks without Gradient and Demonstrations

    Full text link
    Large language models (LLMs) have demonstrated remarkable language proficiency, but they face challenges when solving interactive tasks independently. Existing methods either rely on gradient access, which is often inaccessible in state-of-the-art LLMs like GPT-4, or necessitate diverse and high-quality in-context demonstrations. In this study, we propose LLM-PO, a novel approach that enables LLMs to address these tasks without gradient access or extensive demonstrations. The key idea is to maintain a text-based plan and ask the LLM to reflect on the pros and cons of the current plan based on experience collected with it, to update the plan, and to collect more experience with the new plan. Experiments on HotpotQA demonstrate that LLM-PO achieves higher or on-par success rates compared to in-context learning (ICL) baselines while requiring less inference cost.
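The plan-reflect-update loop described in the abstract can be sketched as below. This is our reading of the idea, not the paper's code: `run_episode` and `reflect` are hypothetical callables (the latter stands in for an LLM call that critiques the current plan's pros and cons and emits a revised plan).

```python
def llm_po(initial_plan, run_episode, reflect, iterations=3, episodes=4):
    """Minimal sketch of an LLM-PO-style loop: keep a text-based plan,
    collect experience by acting with it, then ask an LLM (via `reflect`)
    to update the plan based on that experience.

    run_episode(plan) -> (trajectory, success): executes the task once.
    reflect(plan, experience) -> new_plan: stands in for the LLM call.
    """
    plan = initial_plan
    best_plan, best_rate = plan, -1.0
    for _ in range(iterations):
        # Gather experience under the current plan.
        experience = [run_episode(plan) for _ in range(episodes)]
        rate = sum(ok for _, ok in experience) / len(experience)
        # Track the best plan seen so far.
        if rate > best_rate:
            best_plan, best_rate = plan, rate
        # LLM reflects on pros/cons and proposes an updated plan.
        plan = reflect(plan, experience)
    return best_plan, best_rate
```

Because the update signal is the LLM's own text-level reflection on collected trajectories, the loop needs neither gradient access nor curated in-context demonstrations, which is the paper's stated motivation.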